Stefan Viljoen <spamnot@ wrote:
> Would have been nice if it rendered twice as fast, though...
Think about it for a moment. Exactly *what* would be the thing
in a 64-bit binary which could possibly make it twice as fast compared
to a 32-bit binary?
Would adding two numbers together be twice as fast in a 64-bit
binary as in a 32-bit one? Why would it? What would 64-bitness
have to do with the speed at which, e.g., additions are made? You can
naturally have twice the precision in the (integer) addition, but
what exactly could make it *faster*?
Where does this idea that increasing bitness can make a binary
faster come from? I don't get it. It's a bit like saying that reading
a book is faster if we make the pages twice as big. It doesn't make
any sense.
Most processors which support both 32-bit and 64-bit binaries do
not run the 64-bit ones faster. And of course: Why would they? Where
could any speedup come from?
In fact, the Sun UltraSPARC processors, for example, run 64-bit
binaries slightly slower than 32-bit binaries. This is because pointers
are twice as big, so the data cache of the processor fills up faster. This
slowdown isn't big, though; something like 5% or so.
Now, the AMD64 is a special case: 64-bit binaries optimized for the
AMD64 run slightly faster than 32-bit binaries optimized for the same
processor. However, this speedup has nothing to do with the number of
bits as such. It is because in 64-bit mode the CPU has enhancements which
the binary can use for faster performance (the biggest enhancement is
a set of additional CPU registers). This is why it is generally
recommended to run a 64-bit OS and 64-bit binaries on an AMD64 in
order to get its full performance.